fused lasso


Testing for Differences in Gaussian Graphical Models: Applications to Brain Connectivity

Eugene Belilovsky, Gaël Varoquaux, Matthew B. Blaschko

Neural Information Processing Systems

Functional brain networks are well described and estimated from data with Gaussian Graphical Models (GGMs), e.g. using sparse inverse covariance estimators. Comparing functional connectivity of subjects in two populations calls for comparing these estimated GGMs. Our goal is to identify differences in GGMs known to have similar structure. We characterize the uncertainty of differences with confidence intervals obtained using a parametric distribution on parameters of a sparse estimator. Sparse penalties enable statistical guarantees and interpretable models even in high-dimensional and low-sample settings. Characterizing the distributions of sparse models is inherently challenging as the penalties produce a biased estimator.
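
As a rough illustration of the setting, not the paper's method: the sketch below fits a sparse inverse covariance (graphical lasso) to two hypothetical populations with scikit-learn and takes the raw difference of the estimated precision matrices. The synthetic data and the injected edge are assumptions for the example; the paper's contribution, confidence intervals for such differences via a parametric distribution on the sparse estimator, is not reproduced here.

```python
import numpy as np
from sklearn.covariance import GraphicalLassoCV

rng = np.random.default_rng(0)

# Two hypothetical populations of regional time series (samples x regions).
X_a = rng.standard_normal((200, 10))
X_b = rng.standard_normal((200, 10))
X_b[:, 0] += 0.5 * X_b[:, 1]  # inject one extra conditional dependency in group B

# Sparse inverse covariance (GGM) estimate per population.
prec_a = GraphicalLassoCV().fit(X_a).precision_
prec_b = GraphicalLassoCV().fit(X_b).precision_

# Raw difference of the two estimated GGMs; a large entry (i, j) suggests
# a changed edge. The paper attaches statistical guarantees to such entries.
print(np.round(prec_a - prec_b, 2))
```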





Spacing Test for Fused Lasso

Tasaka, Rieko, Kimura, Tatsuya, Suzuki, Joe

arXiv.org Artificial Intelligence

Detecting changepoints in a one-dimensional signal is a classical yet fundamental problem. The fused lasso provides an elegant convex formulation that produces a stepwise estimate of the mean, but quantifying the uncertainty of the detected changepoints remains difficult. Post-selection inference (PSI) offers a principled way to compute valid $p$-values after a data-driven selection, but its application to the fused lasso has been considered computationally cumbersome, requiring the tracking of many ``hit'' and ``leave'' events along the regularization path. In this paper, we show that the one-dimensional fused lasso has a surprisingly simple geometry: each changepoint enters in a strictly one-sided fashion, and there are no leave events. This structure implies that the so-called \emph{conservative spacing test} of Tibshirani et al.\ (2016), previously regarded as an approximation, is in fact \emph{exact}. The truncation region in the selective law reduces to a single lower bound given by the next knot on the LARS path. As a result, the exact selective $p$-value takes a closed form identical to the simple spacing statistic used in the LARS/lasso setting, with no additional computation. This finding establishes one of the rare cases in which an exact PSI procedure for the generalized lasso admits a closed-form pivot. We further validate the result by simulations and real data, confirming both exact calibration and high power.
Keywords: fused lasso; changepoint detection; post-selection inference; spacing test; monotone LASSO
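
Since the abstract states that the truncation region reduces to a single lower bound at the next knot, the exact selective $p$-value is a one-sided truncated-Gaussian pivot. A minimal sketch under that reading; the inputs lam_k, lam_next (consecutive knots from a LARS path computed elsewhere) and scale (the noise level times the norm of the selected contrast) are assumed, not derived here.

```python
from scipy.stats import norm

def spacing_pvalue(lam_k, lam_next, scale):
    """P(Z > lam_k | Z > lam_next) for Z ~ N(0, scale^2), lam_k >= lam_next >= 0.

    lam_k: knot at which the changepoint of interest entered the path.
    lam_next: next knot, serving as the single lower truncation bound.
    """
    return norm.sf(lam_k / scale) / norm.sf(lam_next / scale)

print(spacing_pvalue(3.1, 2.4, 1.0))  # ~0.12: modest knot gap, weak evidence
```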



pared: Model selection using multi-objective optimization

Das, Priyam, Robinson, Sarah, Peterson, Christine B.

arXiv.org Machine Learning

Motivation: Model selection is a ubiquitous challenge in statistics. For penalized models, model selection typically entails tuning hyperparameters to maximize a measure of fit or minimize out-of-sample prediction error. However, these criteria fail to reflect other desirable characteristics, such as model sparsity, interpretability, or smoothness. Results: We present the R package pared to enable the use of multi-objective optimization for model selection. Our approach uses Gaussian process-based optimization to efficiently identify solutions that represent desirable trade-offs. Our implementation supports popular models with multiple objectives, including the elastic net, fused lasso, fused graphical lasso, and group graphical lasso. Our R package generates interactive graphics that allow the user to identify hyperparameter values whose fitted models lie on the Pareto frontier.
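
pared itself is an R package driven by Gaussian-process optimization; purely as an illustration of the trade-off idea, the hedged Python sketch below grid-searches an elastic net and extracts the Pareto frontier over two objectives, cross-validated error and sparsity. Nothing here reflects pared's actual API.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import ElasticNet
from sklearn.model_selection import cross_val_score

X, y = make_regression(n_samples=200, n_features=50, noise=5.0, random_state=0)

# Grid over the penalty level; two objectives: CV error and number of nonzeros.
candidates = []
for alpha in np.logspace(-3, 1, 20):
    model = ElasticNet(alpha=alpha, l1_ratio=0.5, max_iter=10000)
    err = -cross_val_score(model, X, y,
                           scoring="neg_mean_squared_error", cv=5).mean()
    nnz = np.count_nonzero(model.fit(X, y).coef_)
    candidates.append((alpha, err, nnz))

# Keep candidates not dominated in (error, sparsity): the Pareto frontier.
pareto = [c for c in candidates
          if not any(o[1] <= c[1] and o[2] <= c[2] and (o[1], o[2]) != (c[1], c[2])
                     for o in candidates)]
for alpha, err, nnz in pareto:
    print(f"alpha={alpha:.4g}  cv_mse={err:.1f}  nonzeros={nnz}")
```

A GP-based optimizer, as pared uses, would replace the exhaustive grid with a small number of adaptively chosen evaluations; the frontier-extraction step is the same.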



Minmax Trend Filtering: A Locally Adaptive Nonparametric Regression Method via Pointwise Min Max Optimization

Chatterjee, Sabyasachi

arXiv.org Artificial Intelligence

Trend Filtering is a nonparametric regression method which exhibits local adaptivity, in contrast to a host of classical linear smoothing methods. However, there seems to be no unanimously agreed upon definition of local adaptivity in the literature. A question we seek to answer here is: how exactly is the Fused Lasso, or Total Variation Denoising, which is Trend Filtering of order $0$, locally adaptive? To answer this question, we first derive a new pointwise formula for the Fused Lasso estimator in terms of min-max/max-min optimization of penalized local averages. This pointwise representation appears to be new and gives a concrete explanation of the local adaptivity of the Fused Lasso. It implies that the estimation error of the Fused Lasso at any given point is bounded by the best (local) bias-variance tradeoff, where bias and variance have a slightly different meaning than usual. We then propose higher order polynomial versions of the Fused Lasso which are defined pointwise in terms of min-max/max-min optimization of penalized local polynomial regressions. These appear to be new nonparametric regression methods, distinct from any existing method in the nonparametric regression toolbox. We call these estimators Minmax Trend Filtering. They continue to enjoy local adaptivity in the sense that their estimation error at any given point is bounded by the best (local) bias-variance tradeoff.
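
The paper's pointwise min-max formula is not reproduced here. For orientation only, the sketch below solves the order-$0$ problem the abstract starts from, the fused lasso / total variation denoising objective $\frac{1}{2}\|y-x\|_2^2 + \lambda \sum_i |x_{i+1}-x_i|$, with cvxpy (an assumed dependency) on synthetic piecewise-constant data.

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(1)

# Piecewise-constant signal plus Gaussian noise.
signal = np.repeat([0.0, 2.0, -1.0], 50)
y = signal + 0.4 * rng.standard_normal(signal.size)

lam = 2.0  # assumed penalty level, not tuned
x = cp.Variable(y.size)
objective = 0.5 * cp.sum_squares(y - x) + lam * cp.norm1(cp.diff(x))
cp.Problem(cp.Minimize(objective)).solve()

# The solution is stepwise; its distinct levels approximate the true pieces.
print(np.unique(np.round(x.value, 1)))
```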


Untangling Lariats: Subgradient Following of Variationally Penalized Objectives

Mo, Kai-Chia, Shalev-Shwartz, Shai, Shártov, Nisæl

arXiv.org Artificial Intelligence

We describe a novel subgradient following apparatus for calculating the optimum of convex problems with variational penalties. In this setting, we receive a sequence $y_1,\ldots,y_n$ and seek a smooth sequence $x_1,\ldots,x_n$. The smooth sequence attains the minimum Bregman divergence to the input sequence subject to additive variational penalties of the general form $\sum_i g_i(x_{i+1}-x_i)$. We derive, as special cases of our apparatus, known algorithms for the fused lasso and isotonic regression. Our approach also facilitates new variational penalties such as non-smooth barrier functions. We next derive and analyze multivariate problems in which $\mathbf{x}_i,\mathbf{y}_i\in\mathbb{R}^d$ and the variational penalties depend on $\|\mathbf{x}_{i+1}-\mathbf{x}_i\|$. The norms we consider are $\ell_2$ and $\ell_\infty$, which promote group sparsity. Last but not least, we derive a lattice-based subgradient following procedure for variational penalties characterized through the output of arbitrary convolutional filters. This paradigm yields efficient solvers for problems in which sparse high-order discrete derivatives, such as acceleration and jerk, are desirable.
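
Among the special cases the abstract names, isotonic regression has a classic direct solver. A minimal sketch of the pool-adjacent-violators algorithm for that special case follows; this is standard PAVA, not the paper's subgradient-following apparatus.

```python
import numpy as np

def isotonic_l2(y):
    """Pool Adjacent Violators: argmin_x sum_i (y_i - x_i)^2 s.t. x_1 <= ... <= x_n."""
    means, weights = [], []
    for yi in y:
        means.append(float(yi))
        weights.append(1.0)
        # Merge adjacent blocks while the monotonicity constraint is violated.
        while len(means) > 1 and means[-2] > means[-1]:
            w = weights[-2] + weights[-1]
            m = (weights[-2] * means[-2] + weights[-1] * means[-1]) / w
            means.pop(); weights.pop()
            means[-1], weights[-1] = m, w
    # Expand each block's mean back to a full-length solution.
    return np.repeat(means, np.asarray(weights, dtype=int))

print(isotonic_l2([1.0, 3.0, 2.0, 4.0]))  # [1.  2.5 2.5 4. ]
```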